
Added support for aten::randperm and aten::polar #29585

Merged
merged 29 commits, Apr 1, 2025

Conversation

vijaykr338
Contributor

Details:

added support for aten::randperm and tests

currently working on aten::polar and others

Tickets:

@github-actions github-actions bot added the category: PyTorch FE OpenVINO PyTorch Frontend label Mar 19, 2025
Contributor Author

@vijaykr338 vijaykr338 left a comment


@mvafin
I found that torch.polar returns complex tensors, while OpenVINO does not support them right now.

To make it work, I computed the real and imaginary parts separately using cos and sin, then wrapped the result in a ComplexTypeMark.

I will now work on str and delete
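The cos/sin decomposition described above can be sketched in plain PyTorch (a minimal illustration only, not the actual OpenVINO frontend code; `polar_decomposed` is a hypothetical name):

```python
import math
import torch

def polar_decomposed(abs_t: torch.Tensor, angle_t: torch.Tensor):
    # Compute the real and imaginary parts of abs * exp(i * angle)
    # separately, mirroring the cos/sin decomposition described above.
    real = abs_t * torch.cos(angle_t)
    imag = abs_t * torch.sin(angle_t)
    return real, imag

abs_t = torch.tensor([1.0, 2.0])
angle_t = torch.tensor([0.0, math.pi / 2])
real, imag = polar_decomposed(abs_t, angle_t)
expected = torch.polar(abs_t, angle_t)  # complex reference result
```

The two real-valued outputs match the real and imaginary parts of torch.polar's complex tensor, which is what the ComplexTypeMark wrapper carries through the graph.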

@vijaykr338 vijaykr338 marked this pull request as ready for review March 23, 2025 12:52
@vijaykr338 vijaykr338 requested a review from a team as a code owner March 23, 2025 12:52
@vijaykr338 vijaykr338 changed the title Added support for aten::randperm with tests [WIP - aten::polar] Added support for aten::randperm and aten::polar Mar 24, 2025
@mvafin
Contributor

mvafin commented Mar 24, 2025

Also please fix code style

src/frontends/pytorch/src/op/polar.cpp:1:-#include "openvino/op/cos.hpp"
src/frontends/pytorch/src/op/polar.cpp:2:-#include "openvino/op/sin.hpp"
src/frontends/pytorch/src/op/polar.cpp:3:-#include "openvino/op/multiply.hpp"
src/frontends/pytorch/src/op/polar.cpp:4:-#include "openvino/op/concat.hpp"
src/frontends/pytorch/src/op/polar.cpp:2:+#include "openvino/op/concat.hpp"
src/frontends/pytorch/src/op/polar.cpp:4:+#include "openvino/op/cos.hpp"
src/frontends/pytorch/src/op/polar.cpp:5:+#include "openvino/op/multiply.hpp"
src/frontends/pytorch/src/op/polar.cpp:6:+#include "openvino/op/sin.hpp"
src/frontends/pytorch/src/op/polar.cpp:20:-    auto real = context.mark_node(std::make_shared<v1::Multiply>(abs,context.mark_node(std::make_shared<v0::Cos>(angle))));
src/frontends/pytorch/src/op/polar.cpp:21:-    auto imag = context.mark_node(std::make_shared<v1::Multiply>(abs,context.mark_node(std::make_shared<v0::Sin>(angle))));
src/frontends/pytorch/src/op/polar.cpp:20:+    auto real =
src/frontends/pytorch/src/op/polar.cpp:21:+        context.mark_node(std::make_shared<v1::Multiply>(abs, context.mark_node(std::make_shared<v0::Cos>(angle))));
src/frontends/pytorch/src/op/polar.cpp:22:+    auto imag =
src/frontends/pytorch/src/op/polar.cpp:23:+        context.mark_node(std::make_shared<v1::Multiply>(abs, context.mark_node(std::make_shared<v0::Sin>(angle))));
src/frontends/pytorch/src/op/randperm.cpp:1:-#include "openvino/op/topk.hpp"
src/frontends/pytorch/src/op/randperm.cpp:2:-#include "openvino/op/random_uniform.hpp"
src/frontends/pytorch/src/op/randperm.cpp:5:-#include "utils.hpp"
src/frontends/pytorch/src/op/randperm.cpp:3:+#include "openvino/op/random_uniform.hpp"
src/frontends/pytorch/src/op/randperm.cpp:5:+#include "openvino/op/topk.hpp"
src/frontends/pytorch/src/op/randperm.cpp:6:+#include "utils.hpp"
src/frontends/pytorch/src/op/randperm.cpp:23:-            OPENVINO_ASSERT(dtype_value == 4, "Only dtype value 4 (int64) is supported for aten::randperm, got: ", dtype_value);
src/frontends/pytorch/src/op/randperm.cpp:23:+            OPENVINO_ASSERT(dtype_value == 4,
src/frontends/pytorch/src/op/randperm.cpp:24:+                            "Only dtype value 4 (int64) is supported for aten::randperm, got: ",
src/frontends/pytorch/src/op/randperm.cpp:25:+                            dtype_value);
src/frontends/pytorch/src/op/randperm.cpp:28:-            OPENVINO_ASSERT(dtype_value == 4, "Only dtype value 4 (int64) is supported for aten::randperm, got: ", dtype_value);
src/frontends/pytorch/src/op/randperm.cpp:30:+            OPENVINO_ASSERT(dtype_value == 4,
src/frontends/pytorch/src/op/randperm.cpp:31:+                            "Only dtype value 4 (int64) is supported for aten::randperm, got: ",
src/frontends/pytorch/src/op/randperm.cpp:32:+                            dtype_value);
src/frontends/pytorch/src/op/randperm.cpp:31:-            PYTORCH_OP_CONVERSION_CHECK(false, "Unexpected number of inputs for aten::randperm: ", num_inputs);
src/frontends/pytorch/src/op/randperm.cpp:35:+        PYTORCH_OP_CONVERSION_CHECK(false, "Unexpected number of inputs for aten::randperm: ", num_inputs);
src/frontends/pytorch/src/op/randperm.cpp:34:-    return {context.mark_node(v0::Constant::create(element::i64, Shape{0},std::vector<int64_t>{}))};}
src/frontends/pytorch/src/op/randperm.cpp:38:+        return {context.mark_node(v0::Constant::create(element::i64, Shape{0}, std::vector<int64_t>{}))};
src/frontends/pytorch/src/op/randperm.cpp:39:+    }
src/frontends/pytorch/src/op/randperm.cpp:41:-    auto topk = context.mark_node(std::make_shared<v11::TopK>(random_tensor, k, axis, ov::op::TopKMode::MIN, ov::op::TopKSortType::SORT_VALUES, element::i64, false));
src/frontends/pytorch/src/op/randperm.cpp:46:+    auto topk = context.mark_node(std::make_shared<v11::TopK>(random_tensor,
src/frontends/pytorch/src/op/randperm.cpp:47:+                                                              k,
src/frontends/pytorch/src/op/randperm.cpp:48:+                                                              axis,
src/frontends/pytorch/src/op/randperm.cpp:49:+                                                              ov::op::TopKMode::MIN,
src/frontends/pytorch/src/op/randperm.cpp:50:+                                                              ov::op::TopKSortType::SORT_VALUES,
src/frontends/pytorch/src/op/randperm.cpp:51:+                                                              element::i64,
src/frontends/pytorch/src/op/randperm.cpp:52:+                                                              false));
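The randperm lowering visible in the diff above (RandomUniform followed by TopK) can be illustrated with a small NumPy sketch, with argsort standing in for TopK:

```python
import numpy as np

# Sketch of the randperm lowering shown in the diff above: draw n uniform
# random values, then take the indices of the n smallest values in sorted
# order (TopK with MIN mode and SORT_VALUES). Those indices form a random
# permutation of 0..n-1. np.argsort stands in for TopK here.
rng = np.random.default_rng(42)
n = 8
random_tensor = rng.random(n)
perm = np.argsort(random_tensor)
```

Because the uniform draws are distinct with overwhelming probability, sorting them induces a uniformly random ordering of the indices.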

@vijaykr338
Contributor Author

Additionally, since randperm produces a random permutation, we can't expect OpenVINO's output to match PyTorch's, so instead we just verify that the output has the correct shape and, when sorted, gives 0 to n-1.

That approach is fine. A better approach would be to use randperm to shuffle a tensor, as that is likely the typical use of this function, and since the output will not be the same between OpenVINO and torch, sort the tensor and compare. That can actually be done inside a torch model, so the regular _test function can be used. Could you implement such an approach?

Thank you for the insight. I updated the model to use randperm and sort within the model itself, so the regular test function can be used.

I also separated the sin and cos for easier reading

                          device=x.device, pin_memory=False)
        else:
            raise ValueError("Invalid num_inputs")
        sorted_p, _ = torch.sort(p)
Contributor


My idea was to use the randperm output to permute an actual tensor. BTW, x can be such a tensor (otherwise it is an unused variable here), and then sort the tensor after the permutation to verify that the permutation works.

Contributor Author


Alright, I updated it so that randperm's output shuffles the actual tensor, which is then sorted.

please check
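For reference, the test model described in this thread might look roughly like this (a hedged sketch under the assumptions above; `RandpermSortModel` is an illustrative name, not the code in the PR):

```python
import torch

class RandpermSortModel(torch.nn.Module):
    # randperm shuffles the input tensor, and sorting afterwards makes
    # the output deterministic, so torch and OpenVINO results can be
    # compared directly by the regular test function.
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        perm = torch.randperm(x.shape[0])
        shuffled = x[perm]          # permute the actual tensor
        sorted_out, _ = torch.sort(shuffled)
        return sorted_out
```

Feeding an already-sorted input makes the expected output equal to the input itself, regardless of which permutation was drawn.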

@vijaykr338
Contributor Author

@mvafin
should I make any further changes?

Comment on lines 38 to 39
atol = 1e-4 if precision == "FP32" else 1e-3
rtol = 1e-4
Contributor


Suggested change
atol = 1e-4 if precision == "FP32" else 1e-3
rtol = 1e-4

you are not using atol and rtol here

Contributor Author

@vijaykr338 vijaykr338 Mar 28, 2025


Ah yes, I have removed them now. I was initially using atol and rtol because I was worried about floating-point differences in the output, but the default test layer handles them fine!

added a comment to explain why we sort
@mvafin
Contributor

mvafin commented Mar 28, 2025

build_jenkins

Contributor

@mvafin mvafin left a comment


Thanks for your contribution. Everything seems to be okay now.

@vijaykr338
Contributor Author

@mvafin Thank you very much for your patience and feedback throughout the process

I appreciate the valuable insights gained and will apply them to my future PRs

@vijaykr338
Contributor Author

Hi @mvafin, are there any issues with the merging?

@mvafin
Contributor

mvafin commented Mar 31, 2025

Hi @mvafin, are there any issues with the merging?

Don't worry about this. That is caused by issues in our CI; I will look into it.

@vijaykr338
Contributor Author

Hi @mvafin, are there any issues with the merging?

Don't worry about this. That is caused by issues in our CI; I will look into it.

ah, alright, thanks for the update.

@mvafin
Contributor

mvafin commented Mar 31, 2025

build_jenkins

@mvafin mvafin added this pull request to the merge queue Apr 1, 2025
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Apr 1, 2025
@mvafin mvafin added this pull request to the merge queue Apr 1, 2025
Merged via the queue into openvinotoolkit:master with commit f2182b3 Apr 1, 2025
185 checks passed
Labels
category: PyTorch FE OpenVINO PyTorch Frontend ExternalPR External contributor